Towards Zero-Waste Furniture Design
In traditional design, shapes are first conceived and then fabricated. While
this decoupling simplifies the design process, it can result in inefficient
material usage, especially when off-cut pieces are hard to reuse. In the
absence of explicit feedback on material usage, the designer is unable to
effectively adapt the design -- even though design variations exist. In this
paper, we investigate {\em waste minimizing furniture design} wherein based on
the current design, the user is presented with design variations that result in
more effective usage of materials. Technically, we dynamically analyze material
space layout to determine {\em which} parts to change and {\em how}, while
maintaining original design intent specified in the form of design constraints.
We evaluate the approach on simple and complex furniture design scenarios, and
demonstrate effective material usage that is difficult, if not impossible, to
achieve without computational support.
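The material-usage feedback loop described above can be sketched in a toy setting: rectangular parts cut from fixed-size stock sheets, with a greedy packing heuristic standing in for the paper's layout analysis. All function names here are illustrative assumptions, not the authors' implementation, and the paper additionally maintains design constraints, which this sketch omits.

```python
def shelf_pack(parts, sheet_w, sheet_h):
    """Greedy shelf packing: place rectangles row by row, tallest first.

    parts: list of (w, h) rectangles, each assumed to fit on one sheet.
    Returns the number of stock sheets consumed."""
    sheets = 1
    x = y = row_h = 0.0
    for w, h in sorted(parts, key=lambda p: -p[1]):
        if x + w > sheet_w:            # current shelf full: open a new one
            x, y, row_h = 0.0, y + row_h, 0.0
        if y + h > sheet_h:            # current sheet full: open a new one
            sheets += 1
            x = y = row_h = 0.0
        x += w
        row_h = max(row_h, h)
    return sheets

def utilization(parts, sheet_w, sheet_h):
    """Fraction of purchased material actually covered by the parts."""
    used = sum(w * h for w, h in parts)
    return used / (shelf_pack(parts, sheet_w, sheet_h) * sheet_w * sheet_h)
```

A design tool could evaluate `utilization` for each candidate design variation and surface the ones that waste the least material, which is the kind of feedback the paper argues designers currently lack.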
Colored fused filament fabrication
Fused filament fabrication is the method of choice for printing 3D models at
low cost and is the de-facto standard for hobbyists, makers, and schools.
Unfortunately, filament printers cannot truly reproduce colored objects. The
best current techniques rely on a form of dithering that exploits occlusion,
which has only been demonstrated for shades of two base colors and behaves
differently depending on surface slope.
We explore a novel approach for 3D printing colored objects, capable of
creating controlled gradients of varying sharpness. Our technique exploits
off-the-shelf nozzles that are designed to mix multiple filaments in a small
melting chamber, obtaining intermediate colors once the mix has stabilized.
We apply this property to produce color gradients. We divide each input layer
into a set of strata, each having a different constant color. By locally
changing the thickness of the stratum, we change the perceived color at a given
location. By optimizing the choice of colors of each stratum, we further
improve quality and allow the use of different numbers of input filaments.
We demonstrate our results by building a functional color printer using low
cost, off-the-shelf components. Using our tool, a user can paint a 3D model
and directly produce its physical counterpart, using any material and color
available for fused filament fabrication.
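The core idea of the abstract, varying per-stratum thickness within a layer to control perceived color, can be sketched for the simplest case of two strata, assuming perceived color blends linearly with thickness. This is a hypothetical illustration; the paper optimizes the stratum colors themselves and supports more filaments, and the linear-mixing assumption is mine.

```python
def stratum_thickness(target, c_top, c_bottom, layer_h):
    """Split one layer of height layer_h into two constant-color strata
    so their blend approximates the target color (RGB channels in 0..1).

    Solves the scalar least-squares fit target ~ t*c_top + (1-t)*c_bottom,
    then returns the two stratum thicknesses (t*layer_h, (1-t)*layer_h)."""
    num = sum((t - b) * (a - b) for t, a, b in zip(target, c_top, c_bottom))
    den = sum((a - b) ** 2 for a, b in zip(c_top, c_bottom))
    t = min(1.0, max(0.0, num / den)) if den else 0.5
    return t * layer_h, (1.0 - t) * layer_h
```

Evaluating this per location, as the abstract describes, yields spatially varying stratum thicknesses and hence a perceived color gradient across the surface.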
From 3D Models to 3D Prints: an Overview of the Processing Pipeline
Due to the wide diffusion of 3D printing technologies, geometric algorithms
for Additive Manufacturing are being invented at an impressive speed. Each
single step, in particular along the Process Planning pipeline, can now count
on dozens of methods that prepare the 3D model for fabrication, while analysing
and optimizing geometry and machine instructions for various objectives. This
report provides a classification of this huge state of the art, and elicits the
relation between each single algorithm and a list of desirable objectives
during Process Planning. The objectives themselves are listed and discussed,
along with possible needs for tradeoffs. Additive Manufacturing technologies
are broadly categorized to explicitly relate classes of devices and supported
features. Finally, this report offers an analysis of the state of the art while
discussing open and challenging problems from both an academic and an
industrial perspective.
Comment: European Union (EU); Horizon 2020; H2020-FoF-2015; RIA - Research and
Innovation action; Grant agreement N. 68044
Procedural band patterns
We seek to cover a parametric domain with a set of evenly spaced bands whose
number and width vary according to a density field. We propose an implicit
procedural algorithm that generates the band pattern from a pixel shader and
adapts to changes in the control fields in real time. Each band is uniquely
identified by an integer. This allows a wide range of texturing effects,
including specifying a different appearance for each individual band. Our
technique also allows for progressive gradations of scale, avoiding the abrupt
doubling of the number of lines typical of subdivision approaches. This leads
to a general approach for drawing bands, drawing splitting and merging curves,
and drawing evenly spaced streamlines. Using these base ingredients, we
demonstrate a wide variety of texturing effects.
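The basic principle, deriving an integer band id implicitly from a coordinate and a density field, and using that id to drive per-band appearance, can be sketched in pixel-shader style. This is only an illustration of the general idea with an assumed constant density; the paper's algorithm additionally handles spatially varying density with smooth transitions instead of abrupt doubling.

```python
import math

def band_id(u, density):
    """Integer band index at parametric coordinate u in [0, 1),
    with `density` bands across the domain."""
    return int(math.floor(u * density))

def band_color(u, density, palette):
    """Per-band appearance: cycle through a palette by band id.
    Because each band has a unique integer id, any per-band styling
    (color, width modulation, dashing) can be keyed on it."""
    return palette[band_id(u, density) % len(palette)]
```

Evaluated independently per pixel, such a function needs no precomputed geometry, which is what lets the pattern react to edits of the control fields in real time.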
Texture Synthesis by Example for Interactive Applications
Millions of individuals explore virtual worlds every day, for entertainment, training, or to plan business trips and vacations. Video games such as Eve Online, World of Warcraft, and many others have popularized their existence. Sandboxes such as Minecraft and Second Life have illustrated how they can serve as a medium, letting people create, share, and even sell their virtual productions. Navigation and exploration software such as Google Earth and Virtual Earth lets us explore a virtual version of the real world and enrich it with information shared among the millions of users of these services every day.
Virtual environments are massive, dynamic 3D scenes that are explored and manipulated interactively by thousands of users simultaneously. Many challenges have to be solved to achieve these goals. Among them lies the key question of content management: how can we create enough detailed graphical content to represent an immersive, convincing, and coherent world? Even if we can produce this data, how can we then store the terabytes it represents, and transfer it for display to each individual user? Rich virtual environments require a massive amount of varied graphical content. Creating this content is extremely time consuming for computer artists and requires a specific set of technical skills. Capturing the data from the real world can simplify this task, but then requires a large quantity of storage, expensive hardware, and long capture campaigns. While this is acceptable for important landmarks (e.g. the Statue of Liberty in New York, the Eiffel Tower in Paris), it is wasteful for generic or anonymous landscapes. In addition, in many cases capture is not an option, either because an imaginary scenery is required or because the scene to be represented no longer exists. Therefore, researchers have proposed methods to generate new content programmatically, using captured data as an example.
Typically, building blocks are extracted from the example content and re-assembled to form new assets. Such approaches have been at the center of my research for the past ten years. However, algorithms for generating data programmatically only partially address the content management challenge: the algorithm generates content as a (slow) pre-process, and its output has to be stored for later use. Instead, I have focused on proposing models and algorithms that can produce graphical content while minimizing storage. The content is either generated when it is needed for the current viewpoint, or is produced in a very compact form that can later be used for rendering. Thanks to such approaches, developers save time during content creation, and distributing the content is also simplified by the reduced data bandwidth.
In addition to the core problem of content synthesis, my approaches required the development of new data structures able to store sparse data generated during display while enabling efficient access. These data structures are specialized for the massive parallelism of graphics processors. I contributed early in this domain and have kept a constant focus on this area. The originality of my approach has thus been to consider simultaneously the problems of generating, storing, and displaying graphical content. As we shall see, each of these areas involves different theoretical and technical backgrounds that nicely complement each other, providing elegant solutions to content generation, management, and display.